Variational expectation-maximization training for Gaussian networks

Authors

  • Nikolaos Nasios
  • Adrian G. Bors
Abstract

This paper introduces a variational expectation-maximization (VEM) algorithm for training Gaussian networks. Hyperparameters model the distributions of the parameters characterizing Gaussian mixture densities. The proposed algorithm employs a hierarchical learning strategy for estimating a set of hyperparameters and the number of Gaussian mixture components. A dual EM algorithm is employed as the initialization stage in the VEM-based learning. In the first stage the EM algorithm is applied to the given data set, while in the second stage EM is applied to the distributions of parameters resulting from several runs of the first-stage EM. Appropriate maximum log-likelihood estimators are considered for all the parameter distributions involved.
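
A minimal sketch of the dual-EM initialization idea described in the abstract, not the authors' implementation: stage-one EM is run several times on the data, a second EM is run on the distribution of the resulting parameter estimates, and its output is used to initialize hyperparameters of a variational (Bayesian) mixture. The helper names (`n_runs`, `stage_two`) and the use of scikit-learn's GaussianMixture / BayesianGaussianMixture are illustrative assumptions.

```python
# Sketch only: dual-EM initialization for variational mixture learning.
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy two-component data set.
X = np.vstack([rng.normal(-2.0, 0.5, (200, 2)),
               rng.normal(+2.0, 0.7, (200, 2))])

n_components, n_runs = 2, 10

# Stage one: several EM runs on the data, each from a different random start.
stage_one_means = []
for seed in range(n_runs):
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    stage_one_means.append(gmm.means_)            # (n_components, n_features)
stage_one_means = np.concatenate(stage_one_means, axis=0)

# Stage two: EM on the distribution of stage-one mean estimates; its cluster
# centres act as hyperparameter (prior mean) initializations.
stage_two = GaussianMixture(n_components=n_components, random_state=0)
stage_two.fit(stage_one_means)
prior_means = stage_two.means_

# Variational mixture initialized with a hyperparameter derived from stage two.
# sklearn's BayesianGaussianMixture takes a single shared mean prior, so the
# average of the stage-two centres is used here as a rough stand-in.
vem = BayesianGaussianMixture(n_components=n_components,
                              mean_prior=prior_means.mean(axis=0),
                              random_state=0).fit(X)
print("estimated mixture means:\n", vem.means_)
```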

Similar articles

Iterative Refinement of Approximate Posterior for Training Directed Belief Networks

Deep directed graphical models, while a potentially powerful class of generative representations, are challenging to train due to difficult inference. Recent advances in variational inference that make use of an inference or recognition network have advanced well beyond traditional variational inference and Markov chain Monte Carlo methods. While these techniques offer higher flexibility as wel...


Regularized Variational Bayesian Learning of Echo State Networks with Delay&Sum Readout

In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm uses a classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The var...


Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition

We propose a method (TT-GP) for approximate inference in Gaussian Process (GP) models. We build on previous scalable GP research including stochastic variational inference based on inducing inputs, kernel interpolation, and structure exploiting algebra. The key idea of our method is to use Tensor Train decomposition for variational parameters, which allows us to train GPs with billions of induc...


Bayesian Gaussian Process Latent Variable Model

We introduce a variational inference framework for training the Gaussian process latent variable model and thus performing Bayesian nonlinear dimensionality reduction. This method allows us to variationally integrate out the input variables of the Gaussian process and compute a lower bound on the exact marginal likelihood of the nonlinear latent variable model. The maximization of the variation...


Expectation-Maximization approaches to independent component analysis

Expectation–Maximization (EM) algorithms for independent component analysis are presented in this paper. For super-Gaussian sources, a variational method is employed to develop an EM algorithm in closed form for learning the mixing matrix and inferring the independent components. For sub-Gaussian sources, a symmetrical form of the Pearson mixture model (Neural Comput. 11 (2) (1999) 417–441) is ...



Journal:

Volume   Issue

Pages   -

Publication date: 2003